Self-Driving Car Engineer Nanodegree

Deep Learning

Project: Build a Traffic Sign Recognition Classifier

In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary.

Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to HTML, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.

In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing the code template and writeup template will cover all of the rubric points for this project.

The rubric contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file.

Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by double-clicking the cell to enter edit mode.


Step 0: Load The Data

In [1]:
# Load packages
import pickle
import matplotlib.pyplot as plt
from random import randrange
import numpy as np
import pandas as pd

from sklearn.utils import shuffle
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import train_test_split

import matplotlib.gridspec as gridspec
import cv2
import tensorflow as tf

# Visualizations will be shown in the notebook.
%matplotlib inline

if 'session' in locals() and session is not None:
    print('Close interactive session')
    session.close()
In [2]:
# Load pickled data
# TODO: Fill this in based on where you saved the training and testing data

training_file = 'traffic-signs-data/train.p'
validation_file= 'traffic-signs-data/valid.p'
testing_file = 'traffic-signs-data/test.p'

with open(training_file, mode='rb') as f:
    train = pickle.load(f)
with open(validation_file, mode='rb') as f:
    valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
    test = pickle.load(f)
    
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']

Step 1: Dataset Summary & Exploration

The pickled data is a dictionary with 4 key/value pairs:

  • 'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
  • 'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
  • 'sizes' is a list containing tuples, (width, height) representing the original width and height of the image.
  • 'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES

Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the pandas shape method might be useful for calculating some of the summary results.
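As a quick check of this layout, the keys and array shapes can be inspected directly (a minimal sketch, assuming the pickle files were loaded as in Step 0):

# Inspect the keys and the shapes of the main arrays in the training pickle.
print(sorted(train.keys()))     # expected: ['coords', 'features', 'labels', 'sizes']
print(train['features'].shape)  # (num examples, 32, 32, 3)
print(train['labels'].shape)    # (num examples,)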

Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas

In [3]:
### Replace each question mark with the appropriate value. 
### Use python, pandas or numpy methods rather than hard coding the results

# TODO: Number of training examples
n_train = len(X_train)

# TODO: Number of validation examples.
n_valid = len(X_valid)

# TODO: Number of testing examples.
n_test = len(X_test)



# TODO: What's the shape of a traffic sign image?
image_shape = X_train[0].shape

# TODO: How many unique classes/labels there are in the dataset.
n_classes = len(np.unique(y_train))

n_classes_test = len(np.unique(y_test))

n_classes_valid = len(np.unique(y_valid))



print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Number of validation  examples =", n_valid)
print("Image data shape =", image_shape)
print("Number of classes train dataset =", n_classes)
print("Number of classes test dataset =", n_classes)
print("Number of classes validation dataset =", n_classes)
Number of training examples = 34799
Number of testing examples = 12630
Number of validation  examples = 4410
Image data shape = (32, 32, 3)
Number of classes train dataset = 43
Number of classes test dataset = 43
Number of classes validation dataset = 43

Include an exploratory visualization of the dataset

Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.

The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.

NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections.

In [4]:
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.

signnames_file = 'signnames.csv'
signnames_df = pd.read_csv(signnames_file)
signnames_df.columns=['id','sign_name']
signnames_df
Out[4]:
id sign_name
0 0 Speed limit (20km/h)
1 1 Speed limit (30km/h)
2 2 Speed limit (50km/h)
3 3 Speed limit (60km/h)
4 4 Speed limit (70km/h)
5 5 Speed limit (80km/h)
6 6 End of speed limit (80km/h)
7 7 Speed limit (100km/h)
8 8 Speed limit (120km/h)
9 9 No passing
10 10 No passing for vehicles over 3.5 metric tons
11 11 Right-of-way at the next intersection
12 12 Priority road
13 13 Yield
14 14 Stop
15 15 No vehicles
16 16 Vehicles over 3.5 metric tons prohibited
17 17 No entry
18 18 General caution
19 19 Dangerous curve to the left
20 20 Dangerous curve to the right
21 21 Double curve
22 22 Bumpy road
23 23 Slippery road
24 24 Road narrows on the right
25 25 Road work
26 26 Traffic signals
27 27 Pedestrians
28 28 Children crossing
29 29 Bicycles crossing
30 30 Beware of ice/snow
31 31 Wild animals crossing
32 32 End of all speed and passing limits
33 33 Turn right ahead
34 34 Turn left ahead
35 35 Ahead only
36 36 Go straight or right
37 37 Go straight or left
38 38 Keep right
39 39 Keep left
40 40 Roundabout mandatory
41 41 End of no passing
42 42 End of no passing by vehicles over 3.5 metric ...
In [5]:
# Show samples of images
df = pd.read_csv('signnames.csv')
print('\n'*2 + 'Sample images' + '\n'*2)
for i in range(5):
    k = randrange(n_train)
    label = 'Example: ' + str(k) + '\nTraffic sign: ' + str(y_train[k]) + '\nSign Name: ' + str(df.loc[y_train[k], "SignName"])
    
    plt.xlabel(label)
    plt.imshow(X_train[k])
    plt.figure()

Sample images


<matplotlib.figure.Figure at 0x2276ee7a4a8>
In [6]:
# Show the distribution of classes in the training and test sets
# the histogram of the data
plt.figure(1)
#n, bins, patches = 
plt.hist(y_train, 50, facecolor='blue', alpha=0.5)

plt.xlabel('Training Classes')
plt.ylabel('Number of Training Samples')
plt.title('Training Samples Distribution')
plt.grid(True)

plt.figure(2)
#n, bins, patches = 
plt.hist(y_test, 50, facecolor='red', alpha=0.5)

plt.xlabel('Test Classes')
plt.ylabel('Number of Test Samples')
plt.title('Test Samples Distribution')
plt.grid(True)

plt.show()

Step 2: Design and Test a Model Architecture

Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.

There are various aspects to consider when thinking about this problem:

  • Neural network architecture
  • Play around with preprocessing techniques (normalization, rgb to grayscale, etc)
  • Number of examples per label (some have more than others).
  • Generate fake data.

Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these.

NOTE: The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!

Pre-process the Data Set (normalization, grayscale, etc.)

Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.

Pre-process the Data Set

To pre-process the dataset I normalized the image data. Input data normalization is very important when constructing a neural network model: it converts all features (pixels) to the same scale, which helps Gradient Descent and similar algorithms converge quickly.

Normalization is implemented in the min_max_normalization function. I used min-max normalization (min=-0.5, max=0.5). In my case min-max scaling works better than other algorithms.

One-hot encoding is used to convert label numbers to vectors. It is implemented in the one_hot() function, which uses the TensorFlow function tf.one_hot (see below).

I also shuffle the training examples and split the training dataset (80/20).
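As a quick check of the min-max formula (a sketch, assuming 8-bit pixel values in the range 0-255):

# x' = a + (b - a) * (x - min) / (max - min), here with a = -0.5 and b = 0.5
pixels = np.array([0., 128., 255.])
scaled = -0.5 + 1.0 * (pixels - pixels.min()) / (pixels.max() - pixels.min())
print(scaled)  # [-0.5, ~0.002, 0.5] -> all pixels end up on the same scale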

In [7]:
### Preprocess the data here. Preprocessing steps could include normalization, converting to grayscale, etc.
### Feel free to use as many code cells as needed.

from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split




X_train_original = X_train
X_test_original = X_test
y_train_original = y_train
y_test_original = y_test




# Grayscale images
#def image_grayscale(image):
#    images = tf.image.rgb_to_grayscale(image)
#    images = tf.Session().run(image)
#    return image

# Normalise input images

def image_normalize(image):
    #images = (images - images.mean()) / (np.max(images) - np.min(images))
    #X_test = (X_test - X_test.mean()) / (np.max(X_test) - np.min(X_test))
    #image = cv2.cvtColor(image, cv2.COLOR_BGR2YUV)
    #image = image[:,:,0]
    image[:,:,0] = cv2.equalizeHist(image[:,:,0])
    image[:,:,1] = cv2.equalizeHist(image[:,:,1])
    image[:,:,2] = cv2.equalizeHist(image[:,:,2])
    image = image/255.-.5
    #image = cv2.resize(image, (img_resize,img_resize),interpolation = cv2.INTER_CUBIC)
    
    return image

def min_max_normalization(data, a=-0.5, b=0.5):
    data_max = np.max(data)
    data_min = np.min(data)
    return a + (b - a) * ((data - data_min) / (data_max - data_min))

#def one_hot_encoding(y_train, y_test):
#    labelBinarizer = LabelBinarizer()
#    labelBinarizer.fit(y_train)

#    y_train_one_hot = labelBinarizer.transform(y_train)
#    y_test_one_hot = labelBinarizer.transform(y_test)
#    return y_train_one_hot, y_test_one_hot


#X_train_preprocessed = image_grayscale(X_train)
X_train = min_max_normalization(X_train)
X_test = min_max_normalization(X_test)
#X_test_grayscale = image_grayscale(X_test)

# Step: 2 One-Hot-Encoding
#y_train, y_test = one_hot_encoding(y_train, y_test)

# Shuffle training examples
X_train, y_train = shuffle(X_train, y_train)
X_test, y_test = shuffle(X_test, y_test)
In [8]:
# Show samples of grayscaled and normalized images

number_samples = 9
random_sample = np.random.randint(0, n_train, number_samples)
sample_images = X_train[random_sample]
sample_labels = list(y_train[random_sample])
#sample_labels = list(map(label_descriptions.get, y_train[random_sample]))
#sample_labels = list(y_train[random_sample])


fig, axes = plt.subplots(nrows=3, ncols=3, figsize=(2*3, 2*3))
axes = axes.ravel()
fig.tight_layout()

for ax, img, label in zip(axes, sample_images, sample_labels):
    ax.imshow(img)
    #ax.set_title(label)
    ax.axis('off')

Model Architecture

Define your architecture here.

Feel free to use as many code cells as needed.

I used the LeNet architecture. I selected this model because it is a well-known model, widely used for computer vision tasks, and it is rather fast to train on slow computers (I get an unrecoverable TensorFlow error when using the GPU with some kinds of models). At the initial stage I did not generate augmented data, but I constructed functions for data augmentation and could apply them later. Even without these functions I reached 96.6% accuracy.

Image augmentation can be used to generate additional data to train the model. I used the function proposed by Vivek Yadav, which applies brightness augmentation and geometric transformations of the images (rotation, translation, shear).

See additional information:

https://medium.com/@vivek.yadav/dealing-with-unbalanced-data-generating-additional-data-by-jittering-the-original-image-7497fe2119c3#.c38crecmo

https://github.com/vxy10/ImageAugmentation

Image augmentation functions used to generate additional training data:

def augment_brightness_camera_images(image):
    image1 = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)
    random_bright = .25 + np.random.uniform()

    #print(random_bright)
    image1[:,:,2] = image1[:,:,2]*random_bright
    image1 = cv2.cvtColor(image1, cv2.COLOR_HSV2RGB)
    return image1

def transform_image(img, ang_range, shear_range, trans_range, brightness=0):
    '''
    This function transforms images to generate new images.
    The function takes in the following arguments:
    1- Image
    2- ang_range: Range of angles for rotation
    3- shear_range: Range of values to apply affine transform to
    4- trans_range: Range of values to apply translations over.

    A random uniform distribution is used to generate different parameters
    for the transformation.
    '''
    # Rotation
    ang_rot = np.random.uniform(ang_range) - ang_range/2
    rows, cols, ch = img.shape
    Rot_M = cv2.getRotationMatrix2D((cols/2, rows/2), ang_rot, 1)

    # Translation
    tr_x = trans_range*np.random.uniform() - trans_range/2
    tr_y = trans_range*np.random.uniform() - trans_range/2
    Trans_M = np.float32([[1, 0, tr_x], [0, 1, tr_y]])

    # Shear
    pts1 = np.float32([[5, 5], [20, 5], [5, 20]])
    pt1 = 5 + shear_range*np.random.uniform() - shear_range/2
    pt2 = 20 + shear_range*np.random.uniform() - shear_range/2
    pts2 = np.float32([[pt1, 5], [pt2, pt1], [5, pt2]])
    shear_M = cv2.getAffineTransform(pts1, pts2)

    img = cv2.warpAffine(img, Rot_M, (cols, rows))
    img = cv2.warpAffine(img, Trans_M, (cols, rows))
    img = cv2.warpAffine(img, shear_M, (cols, rows))

    # Brightness
    if brightness == 1:
        img = augment_brightness_camera_images(img)

    return img

print('Show sample of image augmentation')

k = randrange(n_train)
label = 'Example: ' + str(k) + '\nTraffic sign: ' + str(y_train[k]) + '\nSign Name: ' + str(df.loc[y_train[k], "SignName"])
plt.xlabel(label)
image = X_train[k]
plt.imshow(image);

gs1 = gridspec.GridSpec(5, 5)
gs1.update(wspace=0.01, hspace=0.02)  # set the spacing between axes.
plt.figure(figsize=(12, 12))
for i in range(25):
    ax1 = plt.subplot(gs1[i])
    ax1.set_xticklabels([])
    ax1.set_yticklabels([])
    ax1.set_aspect('equal')
    img = transform_image(image, 20, 10, 5, brightness=1)

    plt.subplot(5, 5, i+1)
    plt.imshow(img)
    plt.axis('off')

plt.show()

Train, Validate and Test the Model

A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation sets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
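A minimal sketch (not executed in this notebook) of how training accuracy could be reported next to validation accuracy, using the evaluate() helper and the saved model defined later in this step:

# Compare training and validation accuracy to diagnose under-/overfitting:
# both low -> underfitting; high train but low validation -> overfitting.
with tf.Session() as sess:
    saver.restore(sess, './models/lenet')
    train_accuracy = evaluate(X_train, y_train)
    validation_accuracy = evaluate(X_validation, y_validation)
    print("Train Accuracy      = {:.3f}".format(train_accuracy))
    print("Validation Accuracy = {:.3f}".format(validation_accuracy))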

Train your model here.

Calculate and report the accuracy on the training and validation set.

Once a final model architecture is selected, the accuracy on the test set should be calculated and reported as well.

Feel free to use as many code cells as needed.

Hyperparameters: I chose a learning rate of 0.0005 and a batch size of 128, and ran the optimization for a total of 28 epochs. Epochs: I ran a total of 28 epochs to train the neural network. Stopping condition: I used the accuracy on the validation data as a criterion to monitor whether the model was overfitting; after 28 epochs the validation accuracy changes only minimally or drops. Optimization: I used the Adam optimizer with default settings.
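The training loop below runs a fixed 28 epochs; a minimal sketch of the validation-based stopping rule described above (the patience value is an illustrative assumption, not part of the training code):

def should_stop(val_history, patience=3):
    # Stop when validation accuracy has not improved for `patience` consecutive epochs.
    if len(val_history) <= patience:
        return False
    best_before = max(val_history[:-patience])
    return max(val_history[-patience:]) <= best_before

# Example with the first validation accuracies reported below:
print(should_stop([0.544, 0.792, 0.856, 0.893]))  # False - still improving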

LeNet Model:

Layer 1 Convolutional. Input = 32x32x3. Output = 28x28x6. This layer transforms the 32x32x3 tensor to 28x28x6 using a filter of shape (5, 5, 3, 6) with VALID padding and a standard ReLU activation, followed by max pooling. Pooling input = 28x28x6, output = 14x14x6.

Layer 2 Convolutional. Input = 14x14x6. Output = 10x10x16. This layer transforms the 14x14x6 tensor to 10x10x16 with VALID padding and a standard ReLU activation, followed by max pooling. Pooling input = 10x10x16, output = 5x5x16. Flatten. Input = 5x5x16. Output = 400.

Layer 3 Fully Connected. Input = 400. Output = 120. Use a standard ReLU activation.

Layer 4 Fully Connected. Input = 120. Output = 84. Use a standard ReLU activation.

Layer 5 Fully Connected. Input = 84. Output = 43.

In [9]:
X_train, X_validation, y_train, y_validation = train_test_split(X_train, y_train, test_size=0.2)
In [10]:
import tensorflow as tf
from tensorflow.contrib.layers import flatten

EPOCHS = 28
BATCH_SIZE = 128
In [11]:
# One hot
#from sklearn.preprocessing import OneHotEncoder

def one_hot():
    # Build the input placeholders and the one-hot encoded labels on the CPU.
    with tf.device('/cpu:0'):
        x = tf.placeholder(tf.float32, (None, 32, 32, 3))
        y = tf.placeholder(tf.int32, (None))
        one_hot_y = tf.one_hot(y, n_classes)
    return x, y, one_hot_y

#enc = OneHotEncoder()
#enc.fit(y_train)
#one_hot_sklearn = enc.transform(y_train).toarray()
In [12]:
# LeNet implementation

def LeNet(x):    
    # Hyperparameters
    mu = 0
    sigma = 0.1

    with tf.device('/cpu:0'):
        # Layer 1: Convolutional. Input = 32x32x3. Output = 28x28x6.
        conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 3, 6), mean = mu, stddev = sigma))
        conv1_b = tf.Variable(tf.zeros(6))
        conv1   = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b

        # Activation.
        conv1 = tf.nn.relu(conv1)

        # Pooling. Input = 28x28x6. Output = 14x14x6.
        conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

        # Layer 2: Convolutional. Output = 10x10x16.
        conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
        conv2_b = tf.Variable(tf.zeros(16))
        conv2   = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b

        # Activation.
        conv2 = tf.nn.relu(conv2)

        # Pooling. Input = 10x10x16. Output = 5x5x16.
        conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

        # Flatten. Input = 5x5x16. Output = 400.
        fc0   = flatten(conv2)

        # Layer 3: Fully Connected. Input = 400. Output = 120.
        fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
        fc1_b = tf.Variable(tf.zeros(120))
        fc1   = tf.matmul(fc0, fc1_W) + fc1_b

        # Activation.
        fc1    = tf.nn.relu(fc1)

        # Layer 4: Fully Connected. Input = 120. Output = 84.
        fc2_W  = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
        fc2_b  = tf.Variable(tf.zeros(84))
        fc2    = tf.matmul(fc1, fc2_W) + fc2_b

        # Activation.
        fc2    = tf.nn.relu(fc2)

        # Layer 5: Fully Connected. Input = 84. Output = 43.
        fc3_W  = tf.Variable(tf.truncated_normal(shape=(84, n_classes), mean = mu, stddev = sigma))
        fc3_b  = tf.Variable(tf.zeros(n_classes))
        logits = tf.matmul(fc2, fc3_W) + fc3_b

    return logits
In [13]:
with tf.device('/cpu:0'):
    x = tf.placeholder(tf.float32, (None, 32, 32, 3))
    y = tf.placeholder(tf.int32, (None))
    one_hot_y = tf.one_hot(y, n_classes)
In [14]:
rate = 0.0005
logits = LeNet(x)
with tf.device('/cpu:0'):
    cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels = one_hot_y, logits = logits)
    
    loss_operation = tf.reduce_mean(cross_entropy)
    optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
In [15]:
with tf.device('/cpu:0'):
    correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
    accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    saver = tf.train.Saver()

def evaluate(X_data, y_data):
   
    num_examples = len(X_data)
    total_accuracy = 0
    sess = tf.get_default_session()
    for offset in range(0, num_examples, BATCH_SIZE):
        batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
        accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
        total_accuracy += (accuracy * len(batch_x))
    return total_accuracy / num_examples
In [16]:
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:

    
    sess.run(tf.global_variables_initializer())
    num_examples = len(X_train)
    
    print("Training...")
    print()
    for i in range(EPOCHS):
        X_train, y_train = shuffle(X_train, y_train)
        for offset in range(0, num_examples, BATCH_SIZE):
            end = offset + BATCH_SIZE
            batch_x, batch_y = X_train[offset:end], y_train[offset:end]
            sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
            
        validation_accuracy = evaluate(X_validation, y_validation)
        print("EPOCH {} ...".format(i+1))
        print("Validation Accuracy = {:.3f}".format(validation_accuracy))
        print()
        
    saver.save(sess, './models/lenet')
    print("Model saved")
Training...

EPOCH 1 ...
Validation Accuracy = 0.544

EPOCH 2 ...
Validation Accuracy = 0.792

EPOCH 3 ...
Validation Accuracy = 0.856

EPOCH 4 ...
Validation Accuracy = 0.893

EPOCH 5 ...
Validation Accuracy = 0.915

EPOCH 6 ...
Validation Accuracy = 0.944

EPOCH 7 ...
Validation Accuracy = 0.938

EPOCH 8 ...
Validation Accuracy = 0.951

EPOCH 9 ...
Validation Accuracy = 0.956

EPOCH 10 ...
Validation Accuracy = 0.957

EPOCH 11 ...
Validation Accuracy = 0.965

EPOCH 12 ...
Validation Accuracy = 0.965

EPOCH 13 ...
Validation Accuracy = 0.969

EPOCH 14 ...
Validation Accuracy = 0.972

EPOCH 15 ...
Validation Accuracy = 0.968

EPOCH 16 ...
Validation Accuracy = 0.976

EPOCH 17 ...
Validation Accuracy = 0.971

EPOCH 18 ...
Validation Accuracy = 0.978

EPOCH 19 ...
Validation Accuracy = 0.977

EPOCH 20 ...
Validation Accuracy = 0.975

EPOCH 21 ...
Validation Accuracy = 0.976

EPOCH 22 ...
Validation Accuracy = 0.979

EPOCH 23 ...
Validation Accuracy = 0.977

EPOCH 24 ...
Validation Accuracy = 0.976

EPOCH 25 ...
Validation Accuracy = 0.972

EPOCH 26 ...
Validation Accuracy = 0.982

EPOCH 27 ...
Validation Accuracy = 0.982

EPOCH 28 ...
Validation Accuracy = 0.980

Model saved

Validation Accuracy = 98.0%


Step 3: Test a Model on New Images

To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.

You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.

Load and Output the Images

In [17]:
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
import glob


myTestImages =  []
rgbNormalization = True
print ("List of files:")
for fileName in glob.glob('./new-signs/*.*'):
    print(fileName)
    img = cv2.imread(fileName)
    
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # convert bgr to rgb
    
    img = cv2.resize(img,(32,32)) # resize my images 
    
    #if rgbNormalization == True:
    #img = cv2.normalize(img, None, 0.0, 1.1, cv2.NORM_MINMAX, cv2.CV_32F)
    #img = cv2.normalize(img, None, -0.5, 0.5, cv2.NORM_MINMAX, cv2.CV_32F)
    #plt.imshow(img)
    myTestImages.append(img)

  
myTestImagesNP = np.array(myTestImages)

#myTestImagesNP = (myTestImagesNP - myTestImagesNP.mean()) / (np.max(myTestImagesNP) - np.min(myTestImagesNP))

n_new = len(myTestImagesNP)
print('\n', "Number of new image examples =", n_new)
print('\n', "Shape", myTestImagesNP.shape)
      
List of files:
./new-signs\50_32.jpg
./new-signs\road_work1_32.jpg
./new-signs\road_work2_32.jpg
./new-signs\roundabout_32.jpg
./new-signs\stop2_32.jpg
./new-signs\stop_32.jpg
./new-signs\warning_32.jpg

 Number of new image examples = 7

 Shape (7, 32, 32, 3)
In [18]:
# Show new images
print('\n'*2 + 'New images' + '\n'*2)
for i in range(7):
    
    #k = randrange(n_train)
    #label = 'Exapmle: ' + str(k) +'\nTrafic sign: ' + str(y_train[k]) + '\nSign Name: ' + str(df.loc[y_train[k], "SignName"]) 
    
    #plt.xlabel(label)
    plt.imshow(myTestImagesNP[i])
    plt.figure()

New images


<matplotlib.figure.Figure at 0x22712dd0780>
In [19]:
# Show samples of images

# Normalise input (images still in colour)
#myTestImagesNP = (myTestImagesNP - myTestImagesNP.mean()) / (np.max(myTestImagesNP) - np.min(myTestImagesNP))
myTestImagesNP = min_max_normalization(myTestImagesNP)

number_samples = 7
random_sample = np.random.randint(0, n_new, number_samples)
print (random_sample)
sample_images = myTestImagesNP[random_sample]
#sample_labels = list(map(label_descriptions.get, y_train[random_sample]))
#sample_labels = list(y_train[random_sample])


fig, axes = plt.subplots(nrows=3, ncols=3, figsize=(2*3, 2*3))
axes = axes.ravel()
fig.tight_layout()

# The web images have no ground-truth labels; sample_labels from the earlier
# training-set cell does not apply here, so only the images are shown.
for ax, img in zip(axes, sample_images):
    ax.imshow(img)
    ax.axis('off')
[2 0 0 1 3 1 4]

Predict the Sign Type for Each Image

In [20]:
from random import randrange
X_predict=[]
y_predict=[]
for i in range(20):
    r = randrange(len(X_test))
    X_predict.append(X_test_original[r])
    y_predict.append(y_test_original[r])

predict_count=len(X_predict)
y_predict_df = pd.DataFrame(y_predict)
y_predict_df.columns=['id']
In [21]:
print("Number of predictions: %d" % (len(X_predict)))
import tensorflow as tf
keep_prob = 1.0
with tf.Session() as sess:
    # Restore variables from disk.
    saver.restore(sess, './models/lenet')
    
    print("Model restored.")
   
    #ps=sess.run(tf.argmax(logits,1),feed_dict={x:X_predict, keep_prob: 1.0})
    ps=sess.run(tf.argmax(logits,1),feed_dict={x:X_predict})
    
    
print ('\n',[(p,signnames_df.iloc[p].sign_name) for p in ps])
Number of predictions: 20
Model restored.

 [(34, 'Turn left ahead'), (1, 'Speed limit (30km/h)'), (9, 'No passing'), (25, 'Road work'), (10, 'No passing for vehicles over 3.5 metric tons'), (28, 'Children crossing'), (4, 'Speed limit (70km/h)'), (11, 'Right-of-way at the next intersection'), (2, 'Speed limit (50km/h)'), (31, 'Wild animals crossing'), (13, 'Yield'), (17, 'No entry'), (5, 'Speed limit (80km/h)'), (9, 'No passing'), (38, 'Keep right'), (35, 'Ahead only'), (31, 'Wild animals crossing'), (11, 'Right-of-way at the next intersection'), (2, 'Speed limit (50km/h)'), (11, 'Right-of-way at the next intersection')]
In [22]:
# Show samples of images
print('\n'*2 + 'Sample images' + '\n'*2)
for i in range(predict_count):
    #print ('\n',(p,signnames_df.iloc[ps[i]].sign_name))
    #k = randrange(n_train)
    #label = 'Exapmle: ' + str(k) +'\nTrafic sign: ' + str(y_train[k]) + '\nSign Name: ' + str(df.loc[y_train[k], "SignName"]) 
    label = (signnames_df.iloc[ps[i]].sign_name)
    plt.xlabel(label)
    plt.imshow(X_predict[i])
    plt.figure()

Sample images


C:\Anaconda3\envs\sdc\lib\site-packages\matplotlib\pyplot.py:524: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
  max_open_warning, RuntimeWarning)
<matplotlib.figure.Figure at 0x2276f167c18>
In [23]:
from random import randrange
X_predict=[]
y_predict=[]
predict_count = len(myTestImagesNP)
for i in range(len(myTestImagesNP)):
    #r = randrange(len(myTestImagesNP))
    X_predict.append(myTestImagesNP[i])
#    y_predict.append(y_test_original[r])
In [24]:
fig = plt.figure(figsize=(20, predict_count), dpi=200)

for i in range(predict_count):
    plt.subplot(4, 4, i+1)
    # y_predict_df holds class ids for the earlier random test batch, not for
    # these web images, so no class id is used as the title here.
    plt.imshow(X_predict[i])
    plt.xticks([]), plt.yticks([])
plt.show()
In [25]:
print("Number of predictions: %d" % (len(X_predict)))
import tensorflow as tf
keep_prob = 1.0
with tf.Session() as sess:
    # Restore variables from disk.
    saver.restore(sess, './models/lenet')
    print("Model restored.")
   
    #ps=sess.run(tf.argmax(logits,1),feed_dict={x:X_predict, keep_prob: 1.0})
    ps=sess.run(tf.argmax(logits,1),feed_dict={x:X_predict})
    
    
print ('\n',[(p,signnames_df.iloc[p].sign_name) for p in ps])
Number of predictions: 7
Model restored.

 [(9, 'No passing'), (25, 'Road work'), (31, 'Wild animals crossing'), (38, 'Keep right'), (14, 'Stop'), (14, 'Stop'), (18, 'General caution')]
In [26]:
# Show samples of images
print('\n'*2 + 'Sample images' + '\n'*2)
for i in range(predict_count):
    #print ('\n',(p,signnames_df.iloc[ps[i]].sign_name))
    #k = randrange(n_train)
    #label = 'Exapmle: ' + str(k) +'\nTrafic sign: ' + str(y_train[k]) + '\nSign Name: ' + str(df.loc[y_train[k], "SignName"]) 
    label = (signnames_df.iloc[ps[i]].sign_name)
    plt.xlabel(label)
    plt.imshow(X_predict[i])
    plt.figure()

Sample images


<matplotlib.figure.Figure at 0x22712996ac8>

Analyze Performance

In [33]:
### Calculate the accuracy for these 5 new images. 
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
matches=[False, True, False, False, True, True, True]
print("Accuracy %d%%" % (matches.count(True)/len(matches) *100))
Accuracy 57%
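For reference, the same figure can be computed directly from the predicted class ids in ps, given ground-truth ids for the seven web images (the ids below are inferred from the file names and are an assumption, not taken from the dataset):

# Assumed ground truth: 50 km/h, road work x2, roundabout, stop x2, general caution
y_new_true = np.array([2, 25, 25, 40, 14, 14, 18])
matches = (ps == y_new_true)
print("Accuracy %d%%" % (matches.mean() * 100))  # 4 of 7 correct -> 57%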

Output Top 5 Softmax Probabilities For Each Image Found on the Web

For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here.

The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.

tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.

Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tf.nn.top_k is used to choose the three classes with the highest probability:

# (5, 6) array
a = np.array([[ 0.24879643,  0.07032244,  0.12641572,  0.34763842,  0.07893497,
         0.12789202],
       [ 0.28086119,  0.27569815,  0.08594638,  0.0178669 ,  0.18063401,
         0.15899337],
       [ 0.26076848,  0.23664738,  0.08020603,  0.07001922,  0.1134371 ,
         0.23892179],
       [ 0.11943333,  0.29198961,  0.02605103,  0.26234032,  0.1351348 ,
         0.16505091],
       [ 0.09561176,  0.34396535,  0.0643941 ,  0.16240774,  0.24206137,
         0.09155967]])

Running it through sess.run(tf.nn.top_k(tf.constant(a), k=3)) produces:

TopKV2(values=array([[ 0.34763842,  0.24879643,  0.12789202],
       [ 0.28086119,  0.27569815,  0.18063401],
       [ 0.26076848,  0.23892179,  0.23664738],
       [ 0.29198961,  0.26234032,  0.16505091],
       [ 0.34396535,  0.24206137,  0.16240774]]), indices=array([[3, 0, 5],
       [0, 1, 4],
       [0, 5, 1],
       [1, 3, 5],
       [1, 4, 3]], dtype=int32))

Looking just at the first row we get [ 0.34763842, 0.24879643, 0.12789202], you can confirm these are the 3 largest probabilities in a. You'll also notice [3, 0, 5] are the corresponding indices.

In [28]:
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web. 
### Feel free to use as many code cells as needed.
def softmax_image_explore(timage):
    with tf.Session() as sess:
        # Restore variables from disk.
        saver.restore(sess, './models/lenet')
        #saver.restore(sess, os.getcwd() + "/cnn-run")
        print("Model restored.")
    
        image_logits=sess.run(logits,feed_dict={x:[timage]})
        #image_logits=sess.run(logits,feed_dict={x:[timage]})
        #, keep_prob: 1.0
        print(image_logits)
            
        softmax=sess.run(tf.nn.softmax(image_logits))
        print(softmax)

        # experimented with softmax calculations 
        # per https://carnd-forums.udacity.com/questions/12619143/one-reason-for-low-accuracy-ill-conditioned-value-for-log-calculation
        # eventually realised I'd not normalised my captured test images 
        # - hence it was giving me large logit values
        image_logits -= np.max(image_logits)
        print(image_logits)

        predictions=sess.run(tf.nn.softmax(image_logits))
        print(predictions)

        probs = np.exp(image_logits) / np.sum(np.exp(image_logits))
        print(probs)
In [29]:
print(X_predict[1].shape)
for i in range(7):
    softmax_image_explore((X_predict[i]))
(32, 32, 3)
Model restored.
[[ -15.8092823    -7.00054741   12.08781815  -28.6817646   -48.34588623
   -24.87469101  -74.03235626   -4.50142241  -65.23616028   13.45570755
    11.20200539  -11.33777523    9.92695045  -37.60940933  -29.32737923
   -50.2240448     0.96924937   -5.34635687  -26.14115715   -9.21755219
    -5.9143219   -20.01231194  -70.9289093   -11.84158516  -75.9213028
   -41.56445694  -44.80493546  -44.51435471  -33.71873093  -55.95384598
   -29.20291519    3.82796192  -35.08237839  -85.43016052 -113.15419769
   -64.90278625  -55.53981018  -31.23772049  -61.6763382   -73.34302521
   -13.79310322  -10.69383526  -32.64577103]]
[[  1.40492747e-13   9.40240219e-10   1.83320358e-01   3.60748923e-19
    1.04036646e-27   1.62404075e-17   7.27264654e-39   1.14444481e-08
    4.80656914e-35   7.19908834e-01   7.55974427e-02   1.22914838e-11
    2.11231057e-02   4.78606960e-23   1.89155152e-19   1.59042224e-28
    2.71942940e-06   4.91636909e-09   4.57697438e-18   1.02424985e-10
    2.78599654e-09   2.10040073e-15   1.61996526e-37   7.42681958e-12
    0.00000000e+00   9.16903581e-25   3.58923361e-26   4.79953729e-26
    2.34249774e-21   5.16525054e-31   2.14226415e-19   4.74242770e-05
    5.99038353e-22   0.00000000e+00   0.00000000e+00   6.70838038e-35
    7.81459617e-31   2.80006647e-20   1.68982970e-33   1.44898928e-38
    1.05504201e-12   2.34025889e-11   6.84951430e-21]]
[[ -29.26498985  -20.45625496   -1.3678894   -42.13747406  -61.80159378
   -38.33039856  -87.48806763  -17.95713043  -78.69186401    0.
    -2.25370216  -24.79348373   -3.5287571   -51.06511688  -42.78308868
   -63.67975235  -12.48645782  -18.8020649   -39.59686279  -22.67325974
   -19.37002945  -33.46801758  -84.38461304  -25.29729271  -89.37701416
   -55.02016449  -58.26064301  -57.97006226  -47.17443848  -69.40955353
   -42.65862274   -9.62774563  -48.53808594  -98.88586426 -126.60990906
   -78.35848999  -68.99551392  -44.69342804  -75.13204956  -86.79873657
   -27.24880981  -24.14954376  -46.10147858]]
[[  1.40492747e-13   9.40240219e-10   1.83320358e-01   3.60748923e-19
    1.04036646e-27   1.62404075e-17   7.27264654e-39   1.14444481e-08
    4.80656914e-35   7.19908834e-01   7.55974427e-02   1.22914838e-11
    2.11231057e-02   4.78606960e-23   1.89155152e-19   1.59042224e-28
    2.71942940e-06   4.91636909e-09   4.57697438e-18   1.02424985e-10
    2.78599654e-09   2.10040073e-15   1.61996526e-37   7.42681958e-12
    0.00000000e+00   9.16903581e-25   3.58923361e-26   4.79953729e-26
    2.34249774e-21   5.16525054e-31   2.14226415e-19   4.74242770e-05
    5.99038353e-22   0.00000000e+00   0.00000000e+00   6.70838038e-35
    7.81459617e-31   2.80006647e-20   1.68982970e-33   1.44898928e-38
    1.05504201e-12   2.34025889e-11   6.84951430e-21]]
[[  1.40492747e-13   9.40240330e-10   1.83320373e-01   3.60748949e-19
    1.04036656e-27   1.62404075e-17   7.27264795e-39   1.14444489e-08
    4.80656942e-35   7.19908893e-01   7.55974501e-02   1.22914847e-11
    2.11231075e-02   4.78606991e-23   1.89155165e-19   1.59042236e-28
    2.71942940e-06   4.91636953e-09   4.57697479e-18   1.02424992e-10
    2.78599677e-09   2.10040073e-15   1.61996537e-37   7.42682044e-12
    1.09984974e-39   9.16903680e-25   3.58923392e-26   4.79953760e-26
    2.34249795e-21   5.16525148e-31   2.14226440e-19   4.74242806e-05
    5.99038404e-22   8.12753109e-44   0.00000000e+00   6.70838096e-35
    7.81459664e-31   2.80006680e-20   1.68982988e-33   1.44898942e-38
    1.05504212e-12   2.34025906e-11   6.84951430e-21]]
Model restored.
[[ -2.78628368e+01  -2.14309368e+01  -2.93700924e+01  -2.86140423e+01
   -7.59281082e+01  -8.91178906e-01  -1.66485291e+01  -4.99866829e+01
   -5.90227127e+01  -4.43795738e+01  -2.02669449e+01   8.52090168e+00
   -4.06967468e+01  -4.73991508e+01  -6.06463850e-02  -3.73736877e+01
   -7.96168671e+01  -5.13966789e+01  -1.64513817e+01  -3.57648087e+01
    7.26260066e-01  -2.02204037e+01  -2.69107795e+00   5.13892794e+00
    6.47153759e+00   4.64398079e+01  -8.69485474e+00  -8.18831825e+00
   -2.01577950e+00   3.06115780e+01   1.78024216e+01   2.56814623e+00
   -2.09347458e+01  -1.00597328e+02  -4.23610001e+01  -3.91691818e+01
   -4.32450485e+01  -8.85023270e+01  -3.88655205e+01  -1.59892166e+02
   -8.86754837e+01  -3.73920441e+01  -4.02666512e+01]]
[[  5.37987471e-33   3.34280350e-30   1.19173417e-33   2.53821134e-33
    0.00000000e+00   2.78236356e-21   3.99101483e-28   0.00000000e+00
    0.00000000e+00   0.00000000e+00   1.07059113e-29   3.40429986e-17
    1.43574308e-38   0.00000000e+00   6.38424257e-21   3.98348258e-37
    0.00000000e+00   0.00000000e+00   4.86075107e-28   1.99061348e-36
    1.40235790e-20   1.12160187e-29   4.59967603e-22   1.15679254e-18
    4.38531828e-18   9.99999881e-01   1.13584798e-24   1.88497406e-24
    9.03659680e-22   1.33624795e-07   3.65545186e-13   8.84665658e-20
    5.49036736e-30   0.00000000e+00   0.00000000e+00   6.61433895e-38
    0.00000000e+00   0.00000000e+00   8.96120783e-38   0.00000000e+00
    0.00000000e+00   3.91102760e-37   2.20730951e-38]]
[[ -74.30264282  -67.8707428   -75.80989838  -75.05384827 -122.36791992
   -47.33098602  -63.08833694  -96.42649078 -105.46252441  -90.81938171
   -66.70675659  -37.91890717  -87.1365509   -93.83895874  -46.50045395
   -83.81349182 -126.05667114  -97.83648682  -62.89118958  -82.20462036
   -45.71354675  -66.66020966  -49.13088608  -41.30088043  -39.96826935
     0.          -55.13466263  -54.62812805  -48.45558929  -15.8282299
   -28.63738632  -43.87166214  -67.3745575  -147.03713989  -88.80081177
   -85.60899353  -89.68486023 -134.94213867  -85.30532837 -206.33197021
  -135.11529541  -83.83184814  -86.70645905]]
[[  5.37987471e-33   3.34280350e-30   1.19173417e-33   2.53821134e-33
    0.00000000e+00   2.78236356e-21   3.99101483e-28   0.00000000e+00
    0.00000000e+00   0.00000000e+00   1.07059113e-29   3.40429986e-17
    1.43574308e-38   0.00000000e+00   6.38424257e-21   3.98348258e-37
    0.00000000e+00   0.00000000e+00   4.86075107e-28   1.99061348e-36
    1.40235790e-20   1.12160187e-29   4.59967603e-22   1.15679254e-18
    4.38531828e-18   9.99999881e-01   1.13584798e-24   1.88497406e-24
    9.03659680e-22   1.33624795e-07   3.65545186e-13   8.84665658e-20
    5.49036736e-30   0.00000000e+00   0.00000000e+00   6.61433895e-38
    0.00000000e+00   0.00000000e+00   8.96120783e-38   0.00000000e+00
    0.00000000e+00   3.91102760e-37   2.20730951e-38]]
[[  5.37987471e-33   3.34280350e-30   1.19173417e-33   2.53821134e-33
    0.00000000e+00   2.78236356e-21   3.99101483e-28   1.32562835e-42
    0.00000000e+00   3.61113213e-40   1.07059113e-29   3.40429986e-17
    1.43574308e-38   1.76297360e-41   6.38424257e-21   3.98348258e-37
    0.00000000e+00   3.23699945e-43   4.86075107e-28   1.99061348e-36
    1.40235790e-20   1.12160187e-29   4.59967603e-22   1.15679254e-18
    4.38531828e-18   9.99999881e-01   1.13584798e-24   1.88497406e-24
    9.03659680e-22   1.33624795e-07   3.65545186e-13   8.84665658e-20
    5.49036736e-30   0.00000000e+00   2.71830042e-39   6.61433951e-38
    1.12294874e-39   0.00000000e+00   8.96120783e-38   0.00000000e+00
    0.00000000e+00   3.91102760e-37   2.20730951e-38]]
Model restored.
[[  -7.82431078   25.86413002   21.49586868  -19.07466698   22.82671356
   -16.15687561  -62.38026047  -41.93112946  -51.56542206  -39.6037941
   -17.32771301  -20.74669266  -24.78863144  -45.14335251   -6.52255678
   -22.52535629 -101.71257019  -34.1067276    -5.53638029  -21.48453712
   -45.35949326  -16.63903046  -32.56567764  -15.7568655   -24.4139595
    19.57560158  -22.17779922   -4.61328936  -37.07063675  -19.93784332
   -50.25498962   40.62730408  -51.09144592  -53.05588913  -83.05834198
   -45.25888443  -57.22700882  -51.96762085  -33.31456757  -51.49248886
   -30.85165215  -70.02706146  -84.49803925]]
[[  9.07258602e-22   3.87645542e-07   4.91273155e-09   1.17967515e-26
    1.85909510e-08   2.18244828e-25   0.00000000e+00   1.39742245e-36
    0.00000000e+00   1.43244605e-35   6.76792798e-26   2.21621419e-27
    3.89240794e-29   5.62697900e-38   3.33484508e-21   3.74238040e-28
    0.00000000e+00   3.49479981e-33   8.94059134e-21   1.05967034e-27
    4.53321791e-38   1.34755586e-25   1.63189958e-32   3.25585994e-25
    5.66160630e-29   7.20048243e-10   5.29773254e-28   2.25040124e-20
    1.80390858e-34   4.97611610e-27   0.00000000e+00   9.99999642e-01
    0.00000000e+00   0.00000000e+00   0.00000000e+00   5.01307015e-38
    0.00000000e+00   0.00000000e+00   7.71708001e-33   0.00000000e+00
    9.05905183e-32   0.00000000e+00   0.00000000e+00]]
[[ -48.45161438  -14.76317406  -19.13143539  -59.70197296  -17.80059052
   -56.78417969 -103.00756836  -82.55843353  -92.19272614  -80.23109436
   -57.95501709  -61.37399673  -65.41593933  -85.7706604   -47.14986038
   -63.15266037 -142.33987427  -74.73403168  -46.16368484  -62.11183929
   -85.98680115  -57.26633453  -73.19297791  -56.38417053  -65.04125977
   -21.0517025   -62.8051033   -45.24059296  -77.69793701  -60.5651474
   -90.8822937     0.          -91.71875     -93.68319702 -123.68564606
   -85.88618469  -97.85430908  -92.59492493  -73.94187164  -92.11979675
   -71.47895813 -110.65436554 -125.12534332]]
[[  9.07258602e-22   3.87645542e-07   4.91273155e-09   1.17967515e-26
    1.85909510e-08   2.18244828e-25   0.00000000e+00   1.39742245e-36
    0.00000000e+00   1.43244605e-35   6.76792798e-26   2.21621419e-27
    3.89240794e-29   5.62697900e-38   3.33484508e-21   3.74238040e-28
    0.00000000e+00   3.49479981e-33   8.94059134e-21   1.05967034e-27
    4.53321791e-38   1.34755586e-25   1.63189958e-32   3.25585994e-25
    5.66160630e-29   7.20048243e-10   5.29773254e-28   2.25040124e-20
    1.80390858e-34   4.97611610e-27   0.00000000e+00   9.99999642e-01
    0.00000000e+00   0.00000000e+00   0.00000000e+00   5.01307015e-38
    0.00000000e+00   0.00000000e+00   7.71708001e-33   0.00000000e+00
    9.05905183e-32   0.00000000e+00   0.00000000e+00]]
[[  9.07258602e-22   3.87645571e-07   4.91273155e-09   1.17967515e-26
    1.85909510e-08   2.18244828e-25   1.40129846e-45   1.39742245e-36
    9.14557443e-41   1.43244605e-35   6.76792798e-26   2.21621419e-27
    3.89240794e-29   5.62697900e-38   3.33484508e-21   3.74238040e-28
    0.00000000e+00   3.49479981e-33   8.94059134e-21   1.05967034e-27
    4.53321791e-38   1.34755574e-25   1.63189958e-32   3.25585994e-25
    5.66160630e-29   7.20048243e-10   5.29773254e-28   2.25040124e-20
    1.80390858e-34   4.97611610e-27   3.39094610e-40   9.99999642e-01
    1.46910730e-40   2.06018900e-41   0.00000000e+00   5.01307015e-38
    3.18094751e-43   6.11694806e-41   7.71708001e-33   9.83739548e-41
    9.05905183e-32   0.00000000e+00   0.00000000e+00]]
Model restored.
[[-28.14983368 -11.96170998 -11.49895668  -8.31226349 -31.98130035
   -1.21424711 -14.5142374  -33.03733444 -43.01654816 -12.29152393
  -17.09972    -24.76154518  -7.49419498  -1.42693329 -23.30437851
  -17.51829147 -32.40428543 -28.08930016 -11.27370453 -17.43946838
   -5.02072096 -34.81420517 -30.00572014  -3.59975886 -28.42192841
  -14.06882763 -24.86172295 -31.04078865 -20.68908501 -32.9091568
  -21.33503151 -20.69610023 -11.9690876   -3.92567754   0.46205124
    3.73590851   2.78341055   8.36896515  21.68542862 -12.14042187
   13.932024     1.11543417 -15.61598682]]
[[  2.27317805e-22   2.43806575e-15   3.87272484e-15   9.37520461e-14
    4.92774941e-24   1.13399262e-10   1.89888256e-16   1.71403185e-24
    7.94510681e-29   1.75311447e-15   1.43099086e-17   6.73205465e-21
    2.12453441e-13   9.16730442e-11   2.89059945e-20   9.41572164e-18
    3.22811152e-24   2.41503619e-22   4.85113387e-15   1.01879311e-17
    2.52045693e-12   2.89957067e-25   3.55330940e-23   1.04374833e-11
    1.73166644e-22   2.96439641e-16   6.09035556e-21   1.26213801e-23
    3.95181195e-19   1.94843398e-24   2.07139829e-19   3.92417119e-19
    2.42014487e-15   7.53443228e-12   6.06201866e-10   1.60116240e-08
    6.17690521e-09   1.64644143e-06   9.99569237e-01   2.03907046e-15
    4.29093605e-04   1.16513965e-09   6.30978268e-17]]
[[-49.8352623  -33.6471405  -33.18438721 -29.99769211 -53.66672897
  -22.89967537 -36.19966507 -54.72276306 -64.70198059 -33.9769516
  -38.78514862 -46.44697571 -29.17962265 -23.11236191 -44.98980713
  -39.20372009 -54.08971405 -49.77472687 -32.95913315 -39.124897
  -26.70615005 -56.49963379 -51.69114685 -25.28518677 -50.10735703
  -35.7542572  -46.54714966 -52.72621918 -42.37451172 -54.59458542
  -43.02046204 -42.38153076 -33.65451813 -25.61110687 -21.22337723
  -17.94952011 -18.90201759 -13.31646347   0.         -33.82585144
   -7.75340462 -20.56999397 -37.30141449]]
[[  2.27317805e-22   2.43806575e-15   3.87272484e-15   9.37520461e-14
    4.92774941e-24   1.13399262e-10   1.89888256e-16   1.71403185e-24
    7.94510681e-29   1.75311447e-15   1.43099086e-17   6.73205465e-21
    2.12453441e-13   9.16730442e-11   2.89059945e-20   9.41572164e-18
    3.22811152e-24   2.41503619e-22   4.85113387e-15   1.01879311e-17
    2.52045693e-12   2.89957067e-25   3.55330940e-23   1.04374833e-11
    1.73166644e-22   2.96439641e-16   6.09035556e-21   1.26213801e-23
    3.95181195e-19   1.94843398e-24   2.07139829e-19   3.92417119e-19
    2.42014487e-15   7.53443228e-12   6.06201866e-10   1.60116240e-08
    6.17690521e-09   1.64644143e-06   9.99569237e-01   2.03907046e-15
    4.29093605e-04   1.16513965e-09   6.30978268e-17]]
[[  2.27317805e-22   2.43806575e-15   3.87272442e-15   9.37520461e-14
    4.92774941e-24   1.13399262e-10   1.89888256e-16   1.71403185e-24
    7.94510681e-29   1.75311457e-15   1.43099086e-17   6.73205465e-21
    2.12453441e-13   9.16730442e-11   2.89059945e-20   9.41572164e-18
    3.22811152e-24   2.41503619e-22   4.85113345e-15   1.01879311e-17
    2.52045693e-12   2.89957067e-25   3.55330940e-23   1.04374833e-11
    1.73166644e-22   2.96439641e-16   6.09035556e-21   1.26213801e-23
    3.95181195e-19   1.94843398e-24   2.07139829e-19   3.92417119e-19
    2.42014487e-15   7.53443228e-12   6.06201866e-10   1.60116240e-08
    6.17690521e-09   1.64644143e-06   9.99569237e-01   2.03907046e-15
    4.29093605e-04   1.16513965e-09   6.30978268e-17]]
Model restored.
[[ -11.75426865   -3.37102842  -30.25488091  -36.88495636  -36.98577881
   -21.46168709  -18.78510857  -25.77397537  -25.1655941   -21.95349503
    -8.73413944    2.78701353   21.10884857  -12.31124687   25.53054619
   -35.97303772  -41.87546539   24.11416626  -37.53440857  -24.20423698
   -19.09574509    3.42394972  -39.31629944  -22.85432434  -48.59149551
   -11.89327335   -5.12298393  -52.41733551  -18.16457176  -22.20744514
     3.91563416   -8.27121449  -26.41580391  -73.32080078  -55.45255661
   -55.63940811  -65.78657532  -59.20338821  -60.35372925 -113.50419617
   -67.0149231   -36.26018524  -53.4168663 ]]
[[  5.11569486e-17   2.23716797e-13   4.72269577e-25   6.23424679e-28
    5.63634232e-28   3.11189931e-21   4.52323168e-20   4.17083856e-23
    7.66371238e-23   1.90299181e-21   1.04840444e-15   1.05706728e-10
    9.57578234e-03   2.93096892e-17   7.97063947e-01   1.55176239e-27
    4.24066038e-30   1.93360239e-01   3.25634550e-28   2.00424716e-22
    3.31544588e-20   1.99857533e-10   5.48106722e-29   7.73054405e-22
    5.13689011e-33   4.45178746e-17   3.88002212e-14   1.11984376e-34
    8.41287726e-20   1.47620589e-21   3.26780519e-10   1.66561490e-15
    2.19522966e-23   0.00000000e+00   5.38244184e-36   4.46509527e-36
    0.00000000e+00   1.26477668e-37   4.00336734e-38   0.00000000e+00
    5.12272680e-41   1.16444411e-27   4.12162413e-35]]
[[ -37.28481293  -28.90157509  -55.78542709  -62.41550446  -62.5163269
   -46.99223328  -44.31565475  -51.30451965  -50.69614029  -47.48403931
   -34.26468658  -22.74353218   -4.42169762  -37.84179306    0.
   -61.50358582  -67.40601349   -1.41637993  -63.06495667  -49.73478317
   -44.62628937  -22.10659599  -64.84684753  -48.38487244  -74.12203979
   -37.4238205   -30.65353012  -77.94788361  -43.69511795  -47.73799133
   -21.61491203  -33.80176163  -51.9463501   -98.85134888  -80.98310089
   -81.16995239  -91.31712341  -84.7339325   -85.88427734 -139.03474426
   -92.54547119  -61.79073334  -78.94741058]]
[[  5.11569486e-17   2.23716797e-13   4.72269577e-25   6.23424679e-28
    5.63634232e-28   3.11189931e-21   4.52323168e-20   4.17083856e-23
    7.66371238e-23   1.90299181e-21   1.04840444e-15   1.05706728e-10
    9.57578234e-03   2.93096892e-17   7.97063947e-01   1.55176239e-27
    4.24066038e-30   1.93360239e-01   3.25634550e-28   2.00424716e-22
    3.31544588e-20   1.99857533e-10   5.48106722e-29   7.73054405e-22
    5.13689011e-33   4.45178746e-17   3.88002212e-14   1.11984376e-34
    8.41287726e-20   1.47620589e-21   3.26780519e-10   1.66561490e-15
    2.19522966e-23   0.00000000e+00   5.38244184e-36   4.46509527e-36
    0.00000000e+00   1.26477668e-37   4.00336734e-38   0.00000000e+00
    5.12272680e-41   1.16444411e-27   4.12162413e-35]]
[[  5.11569486e-17   2.23716797e-13   4.72269626e-25   6.23424679e-28
    5.63634232e-28   3.11189971e-21   4.52323168e-20   4.17083887e-23
    7.66371238e-23   1.90299181e-21   1.04840444e-15   1.05706735e-10
    9.57578328e-03   2.93096925e-17   7.97063947e-01   1.55176239e-27
    4.24066001e-30   1.93360239e-01   3.25634550e-28   2.00424716e-22
    3.31544588e-20   1.99857533e-10   5.48106722e-29   7.73054455e-22
    5.13689011e-33   4.45178779e-17   3.88002212e-14   1.11984388e-34
    8.41287726e-20   1.47620599e-21   3.26780519e-10   1.66561512e-15
    2.19522982e-23   9.38869971e-44   5.38244184e-36   4.46509563e-36
    1.74973133e-40   1.26477668e-37   4.00336762e-38   0.00000000e+00
    5.12272680e-41   1.16444411e-27   4.12162442e-35]]
Model restored.
[[ -12.57091618   10.3445015    -6.57058191   -1.54402006  -27.88276482
   -12.43884277  -37.55085754  -17.77239609  -37.9994545   -17.3972187
   -21.29229927  -26.87566376  -25.11045265    6.27810526   19.97163582
    -0.81125069  -63.36618042  -13.54694557  -44.59629822  -57.08750916
   -40.04671478  -27.48847961  -10.88761902  -30.23038673  -37.08879852
    -2.24636817  -27.66255188  -56.78036118  -13.57705021    0.283178
   -31.11232185  -34.50674057  -18.42611885  -75.52284241  -41.19268417
   -27.76871109  -28.00871086  -77.78382874  -32.51195526 -101.19054413
   -66.94558716  -38.93160248  -59.7183876 ]]
[[  7.36071038e-15   6.59112557e-05   2.97051945e-12   4.52731214e-10
    1.64842334e-21   8.40000880e-15   1.04296631e-25   4.05457466e-17
    6.65955883e-26   5.90042953e-17   1.20024962e-18   4.51281994e-21
    2.63674822e-20   1.12965529e-06   9.99933004e-01   9.42058653e-10
    6.40940914e-37   2.77354967e-15   9.08812456e-29   3.41672701e-34
    8.59672881e-27   2.44514993e-21   3.96247739e-14   1.57583056e-22
    1.65554109e-25   2.24292251e-10   2.05450338e-21   4.64516659e-34
    2.69129472e-15   2.81437829e-09   6.52363924e-23   2.18933297e-24
    2.10881227e-17   0.00000000e+00   2.73302382e-27   1.84756969e-21
    1.45335300e-21   0.00000000e+00   1.60930275e-23   0.00000000e+00
    1.78772782e-38   2.62192559e-26   2.46055275e-35]]
[[ -32.54255295   -9.62713432  -26.54221725  -21.51565552  -47.85440063
   -32.41047668  -57.52249146  -37.74403381  -57.97109222  -37.36885452
   -41.26393509  -46.84729767  -45.08208847  -13.69353104    0.
   -20.78288651  -83.33781433  -33.51858139  -64.56793213  -77.05914307
   -60.01834869  -47.46011353  -30.85925484  -50.20202255  -57.06043243
   -22.21800423  -47.63418579  -76.7519989   -33.54868698  -19.68845749
   -51.08395767  -54.4783783   -38.39775467  -95.49447632  -61.1643219
   -47.74034882  -47.98034668  -97.75546265  -52.48358917 -121.16217804
   -86.91722107  -58.90323639  -79.69002533]]
[[  7.36071038e-15   6.59112557e-05   2.97051945e-12   4.52731214e-10
    1.64842334e-21   8.40000880e-15   1.04296631e-25   4.05457466e-17
    6.65955883e-26   5.90042953e-17   1.20024962e-18   4.51281994e-21
    2.63674822e-20   1.12965529e-06   9.99933004e-01   9.42058653e-10
    6.40940914e-37   2.77354967e-15   9.08812456e-29   3.41672701e-34
    8.59672881e-27   2.44514993e-21   3.96247739e-14   1.57583056e-22
    1.65554109e-25   2.24292251e-10   2.05450338e-21   4.64516659e-34
    2.69129472e-15   2.81437829e-09   6.52363924e-23   2.18933297e-24
    2.10881227e-17   0.00000000e+00   2.73302382e-27   1.84756969e-21
    1.45335300e-21   0.00000000e+00   1.60930275e-23   0.00000000e+00
    1.78772782e-38   2.62192559e-26   2.46055275e-35]]
[[  7.36070954e-15   6.59112484e-05   2.97051923e-12   4.52731158e-10
    1.64842313e-21   8.40000710e-15   1.04296619e-25   4.05457433e-17
    6.65955822e-26   5.90042886e-17   1.20024951e-18   4.51281954e-21
    2.63674790e-20   1.12965517e-06   9.99932885e-01   9.42058653e-10
    6.40940869e-37   2.77354946e-15   9.08812396e-29   3.41672655e-34
    8.59672804e-27   2.44514953e-21   3.96247705e-14   1.57583044e-22
    1.65554097e-25   2.24292224e-10   2.05450318e-21   4.64516613e-34
    2.69129451e-15   2.81437784e-09   6.52363861e-23   2.18933277e-24
    2.10881194e-17   3.36732021e-42   2.73302363e-27   1.84756949e-21
    1.45335280e-21   3.51725915e-43   1.60930244e-23   0.00000000e+00
    1.78772768e-38   2.62192528e-26   2.46055247e-35]]
Model restored.
[[-34.38079071 -26.00427055 -30.04554558 -65.0872879  -28.75999069
  -40.4491806  -54.69981384 -41.3400116  -70.99791718 -50.16543961
  -47.57801437  16.38295555 -20.87975883 -60.58649445 -55.60625076
  -74.26648712 -83.94475555 -45.25937271  57.1071701    1.45774877
  -21.84136772   8.92357063 -24.1819973    3.38082266  13.56094742
    6.95376825  30.72728729  42.02160645 -15.40322781   2.84510469
    9.23585224   5.47097874 -37.25094223 -59.31463242 -55.51913452
  -42.83091736 -43.58178329  -7.51160145 -33.5004425  -59.50829315
  -29.3999691  -40.99460602 -46.78245163]]
[[  0.00000000e+00   8.03820403e-37   1.41271835e-38   0.00000000e+00
    5.10937130e-38   0.00000000e+00   0.00000000e+00   0.00000000e+00
    0.00000000e+00   0.00000000e+00   0.00000000e+00   2.05920265e-18
    1.35115809e-34   0.00000000e+00   0.00000000e+00   0.00000000e+00
    0.00000000e+00   0.00000000e+00   9.99999762e-01   6.78833298e-25
    5.16517154e-35   1.18611752e-21   4.97235617e-36   4.64454361e-24
    1.22494243e-19   1.65445463e-22   3.49431747e-12   2.80816664e-07
    3.22948882e-32   2.71821687e-24   1.62087788e-21   3.75566201e-23
    0.00000000e+00   0.00000000e+00   0.00000000e+00   0.00000000e+00
    0.00000000e+00   8.63819001e-29   0.00000000e+00   0.00000000e+00
    2.69417538e-38   2.48029828e-43   1.40129846e-45]]
[[ -91.48796082  -83.11144257  -87.15271759 -122.19445801  -85.86715698
   -97.55635071 -111.80698395  -98.4471817  -128.10508728 -107.27261353
  -104.68518066  -40.72421265  -77.98693085 -117.69366455 -112.71342468
  -131.37365723 -141.05192566 -102.36654663    0.          -55.64942169
   -78.94853973  -48.18360138  -81.28916931  -53.72634888  -43.54622269
   -50.15340042  -26.37988281  -15.08556366  -72.51039886  -54.26206589
   -47.87131882  -51.63619232  -94.35810852 -116.42179871 -112.62630463
   -99.93808746 -100.68894958  -64.61877441  -90.60761261 -116.61546326
   -86.50714111  -98.10177612 -103.88961792]]
[[  0.00000000e+00   8.03820403e-37   1.41271835e-38   0.00000000e+00
    5.10937130e-38   0.00000000e+00   0.00000000e+00   0.00000000e+00
    0.00000000e+00   0.00000000e+00   0.00000000e+00   2.05920265e-18
    1.35115809e-34   0.00000000e+00   0.00000000e+00   0.00000000e+00
    0.00000000e+00   0.00000000e+00   9.99999762e-01   6.78833298e-25
    5.16517154e-35   1.18611752e-21   4.97235617e-36   4.64454361e-24
    1.22494243e-19   1.65445463e-22   3.49431747e-12   2.80816664e-07
    3.22948882e-32   2.71821687e-24   1.62087788e-21   3.75566201e-23
    0.00000000e+00   0.00000000e+00   0.00000000e+00   0.00000000e+00
    0.00000000e+00   8.63819001e-29   0.00000000e+00   0.00000000e+00
    2.69417538e-38   2.48029828e-43   1.40129846e-45]]
[[  1.85047067e-40   8.03820403e-37   1.41271835e-38   0.00000000e+00
    5.10937130e-38   4.28797330e-43   0.00000000e+00   1.75162308e-43
    0.00000000e+00   0.00000000e+00   0.00000000e+00   2.05920265e-18
    1.35115809e-34   0.00000000e+00   0.00000000e+00   0.00000000e+00
    0.00000000e+00   2.80259693e-45   9.99999762e-01   6.78833298e-25
    5.16517154e-35   1.18611742e-21   4.97235617e-36   4.64454361e-24
    1.22494243e-19   1.65445463e-22   3.49431747e-12   2.80816664e-07
    3.22948882e-32   2.71821687e-24   1.62087788e-21   3.75566201e-23
    1.04901203e-41   0.00000000e+00   0.00000000e+00   3.92363570e-44
    1.82168800e-44   8.63819001e-29   4.46286936e-40   0.00000000e+00
    2.69417538e-38   2.48029828e-43   1.40129846e-45]]
In [30]:
from pylab import *
from matplotlib import gridspec

def topk_plot(test_images):
    # Restore the trained model, compute softmax probabilities for the given
    # images, and plot each image next to a bar chart of its top-5 classes.
    with tf.Session() as sess:
        # Restore variables from disk.
        saver.restore(sess, './models/lenet')
        print("Model restored.")

        softmax = tf.nn.softmax(logits)
        probs = sess.run(softmax, feed_dict={x: test_images})
        print("Softmax\n", probs)

        (values, indices) = sess.run(tf.nn.top_k(probs, k=5))
        print("top_k values, indices")
        print(values, indices)

    # Look up the sign name for each predicted class id
    # (assumes 5 test images with k=5, i.e. 25 entries).
    indices_names = np.array([signnames_df.iloc[i].sign_name for i in indices.reshape(25)])
    image_values = np.array([p for p in zip(indices_names, values.reshape(25))])

    image_values = image_values.reshape(5, 5, 2)

    pos = arange(5) + .5    # the bar centers on the y axis

    fig = plt.figure(figsize=(10, 6))
    gs = gridspec.GridSpec(len(test_images), 2, width_ratios=[1, 5])

    for i in range(len(image_values)):
        values = [float(v[1]) for v in image_values[i]]
        labels = [v[0] for v in image_values[i]]

        # Left column: the test image; right column: its top-5 probabilities.
        plt.subplot(gs[i * 2])
        plt.imshow(test_images[i])
        plt.xticks([]), plt.yticks([])
        plt.subplot(gs[i * 2 + 1])
        plt.tight_layout()
        barh(-pos, values, align='center')
        yticks(-pos, labels)
        xlabel('softmax probabilities')

    show()
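
For reference, the probability step inside topk_plot is just a softmax over the raw logits followed by tf.nn.top_k. A minimal sketch of that core computation, without the plotting, assuming logits, x, and saver are the tensors/ops defined in the earlier training cells:

In [ ]:
# Minimal sketch of the top-k step only (no plotting); assumes `logits`, `x`,
# and `saver` are defined as in the earlier training cells.
def topk_probs(test_images, k=5):
    with tf.Session() as sess:
        saver.restore(sess, './models/lenet')
        probs = sess.run(tf.nn.softmax(logits), feed_dict={x: test_images})
        values, indices = sess.run(tf.nn.top_k(probs, k=k))
    return values, indices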
In [31]:
# plot top_k of 5 test images - for reference
topk_plot(X_predict[0:5])
Model restored.
Softmax
 [[  1.40493289e-13   9.40247435e-10   1.83321580e-01   3.60749621e-19
    1.04036848e-27   1.62403149e-17   7.27285394e-39   1.14443823e-08
    4.80652322e-35   7.19907463e-01   7.55974352e-02   1.22915549e-11
    2.11232677e-02   4.78611535e-23   1.89154790e-19   1.59042512e-28
    2.71939030e-06   4.91644370e-09   4.57700043e-18   1.02427331e-10
    2.78602319e-09   2.10042063e-15   1.61997445e-37   7.42683345e-12
    1.09985534e-39   9.16898355e-25   3.58922714e-26   4.79954623e-26
    2.34249330e-21   5.16520164e-31   2.14226815e-19   4.74248627e-05
    5.99034870e-22   8.12753109e-44   0.00000000e+00   6.70836718e-35
    7.81476027e-31   2.80010395e-20   1.68985229e-33   1.44900861e-38
    1.05505210e-12   2.34025889e-11   6.84947472e-21]
 [  5.37983357e-33   3.34280350e-30   1.19174326e-33   2.53819187e-33
    0.00000000e+00   2.78232115e-21   3.99101483e-28   1.32562835e-42
    0.00000000e+00   3.61107608e-40   1.07059113e-29   3.40429986e-17
    1.43574294e-38   1.76297360e-41   6.38421874e-21   3.98342204e-37
    0.00000000e+00   3.23699945e-43   4.86067644e-28   1.99061348e-36
    1.40233642e-20   1.12158479e-29   4.59964069e-22   1.15679244e-18
    4.38526824e-18   9.99999881e-01   1.13584364e-24   1.88496696e-24
    9.03656247e-22   1.33623260e-07   3.65541717e-13   8.84668954e-20
    5.49036736e-30   0.00000000e+00   2.71830042e-39   6.61433895e-38
    1.12294874e-39   0.00000000e+00   8.96120783e-38   0.00000000e+00
    0.00000000e+00   3.91102760e-37   2.20730951e-38]
 [  9.07268902e-22   3.87649266e-07   4.91272178e-09   1.17968417e-26
    1.85913400e-08   2.18246480e-25   1.40129846e-45   1.39743312e-36
    9.14571456e-41   1.43244591e-35   6.76805679e-26   2.21620572e-27
    3.89243743e-29   5.62706476e-38   3.33487032e-21   3.74236596e-28
    0.00000000e+00   3.49482662e-33   8.94062527e-21   1.05967429e-27
    4.53328685e-38   1.34756609e-25   1.63189958e-32   3.25584737e-25
    5.66160630e-29   7.20048243e-10   5.29773205e-28   2.25040996e-20
    1.80389480e-34   4.97611610e-27   3.39094610e-40   9.99999642e-01
    1.46910730e-40   2.06018900e-41   0.00000000e+00   5.01307015e-38
    3.18094751e-43   6.11694806e-41   7.71702050e-33   9.83753561e-41
    9.05905125e-32   0.00000000e+00   0.00000000e+00]
 [  2.27315205e-22   2.43806575e-15   3.87272442e-15   9.37523984e-14
    4.92769301e-24   1.13399047e-10   1.89888984e-16   1.71401863e-24
    7.94504602e-29   1.75311436e-15   1.43099070e-17   6.73207969e-21
    2.12454647e-13   9.16726972e-11   2.89059945e-20   9.41564885e-18
    3.22813597e-24   2.41503645e-22   4.85107797e-15   1.01878923e-17
    2.52046170e-12   2.89959261e-25   3.55326870e-23   1.04375232e-11
    1.73164650e-22   2.96439641e-16   6.09028568e-21   1.26213319e-23
    3.95178171e-19   1.94841169e-24   2.07139829e-19   3.92418593e-19
    2.42014487e-15   7.53446090e-12   6.06200701e-10   1.60117146e-08
    6.17692919e-09   1.64643677e-06   9.99569237e-01   2.03905479e-15
    4.29093983e-04   1.16514631e-09   6.30978268e-17]
 [  5.11581265e-17   2.23717258e-13   4.72267900e-25   6.23415241e-28
    5.63634280e-28   3.11188820e-21   4.52318095e-20   4.17077545e-23
    7.66368398e-23   1.90295586e-21   1.04839258e-15   1.05705736e-10
    9.57565662e-03   2.93093616e-17   7.97064066e-01   1.55176249e-27
    4.24062841e-30   1.93360269e-01   3.25632094e-28   2.00422444e-22
    3.31537059e-20   1.99854508e-10   5.48106722e-29   7.73048599e-22
    5.13685227e-33   4.45172029e-17   3.87994114e-14   1.11981828e-34
    8.41284624e-20   1.47622871e-21   3.26778660e-10   1.66561512e-15
    2.19522998e-23   9.38869971e-44   5.38240094e-36   4.46499375e-36
    1.74968929e-40   1.26475751e-37   4.00330680e-38   0.00000000e+00
    5.12272680e-41   1.16443544e-27   4.12159343e-35]]
top_k values, indices
[[  7.19907463e-01   1.83321580e-01   7.55974352e-02   2.11232677e-02
    4.74248627e-05]
 [  9.99999881e-01   1.33623260e-07   3.65541717e-13   3.40429986e-17
    4.38526824e-18]
 [  9.99999642e-01   3.87649266e-07   1.85913400e-08   4.91272178e-09
    7.20048243e-10]
 [  9.99569237e-01   4.29093983e-04   1.64643677e-06   1.60117146e-08
    6.17692919e-09]
 [  7.97064066e-01   1.93360269e-01   9.57565662e-03   3.26778660e-10
    1.99854508e-10]] [[ 9  2 10 12 31]
 [25 29 30 11 24]
 [31  1  4  2 25]
 [38 40 37 35 36]
 [14 17 12 30 21]]


Step 4: Visualize the Neural Network's State with Test Images

This section is not required to complete the project but acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device, they are often referred to as a black box. We can better understand what the weights of a neural network look like by plotting their feature maps. After successfully training your neural network, you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimulus image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, the inner feature maps may react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.

Provided below is function code that lets you visualize the output of any TensorFlow weight layer you want. The inputs to the function are a stimulus image (one used during training or a new one you provide) and the TensorFlow variable that represents the layer's state during the training process. For instance, if you wanted to see what the LeNet lab's feature maps looked like for its second convolutional layer, you could pass conv2 as the tf_activation argument.

For an example of what feature-map outputs look like, check out NVIDIA's results in their paper End to End Learning for Self-Driving Cars, in the section Visualization of Internal CNN State. NVIDIA showed that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether by comparing feature maps from images with and without a sign, or by comparing what the feature maps look like in a trained network versus a completely untrained one on the same sign image.

[Combined Image: example feature-map visualization output]

Your output should look something like this (above)

In [32]:
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.

# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail; by default matplotlib sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry

def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1, plt_num=1):
    # Here make sure to preprocess your image_input in a way your network expects,
    # with size, normalization, etc. if needed
    # image_input =
    # Note: x should be the same name as your network's tensorflow data placeholder variable
    # If you get an error that tf_activation is not defined, it may be having trouble accessing the variable from inside a function
    activation = tf_activation.eval(session=sess, feed_dict={x: image_input})
    featuremaps = activation.shape[3]
    plt.figure(plt_num, figsize=(15, 15))
    for featuremap in range(featuremaps):
        plt.subplot(6, 8, featuremap + 1)  # sets the number of feature maps to show on each row and column
        plt.title('FeatureMap ' + str(featuremap))  # displays the feature map number
        if activation_min != -1 and activation_max != -1:
            plt.imshow(activation[0, :, :, featuremap], interpolation="nearest", vmin=activation_min, vmax=activation_max, cmap="gray")
        elif activation_max != -1:
            plt.imshow(activation[0, :, :, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
        elif activation_min != -1:
            plt.imshow(activation[0, :, :, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
        else:
            plt.imshow(activation[0, :, :, featuremap], interpolation="nearest", cmap="gray")
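
For concreteness, here is a hedged usage sketch of the function above (not executed here). It assumes conv1 is a convolutional activation tensor exposed at notebook scope by the LeNet definition and that X_predict holds images preprocessed the same way as the training data; both names are illustrative, so substitute whatever layer and variable names your own network uses. Because outputFeatureMap reads sess and x from the enclosing scope, the call must happen inside the with block while the session is open.

In [ ]:
# Usage sketch only - `conv1` is an assumed/illustrative name for a conv-layer
# tensor defined at notebook scope; X_predict is the preprocessed image batch
# used earlier. outputFeatureMap looks up the global `sess` bound by this block.
with tf.Session() as sess:
    saver.restore(sess, './models/lenet')
    outputFeatureMap(X_predict[0:1], conv1, plt_num=1)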

Question 9

Discuss how you used the visual output of your trained network's feature maps to show that it had learned to look for interesting characteristics in traffic sign images.

Answer:

Note: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.

Project Writeup

Once you have completed the code implementation, document your results in a project writeup using this template as a guide. The writeup can be in a markdown or pdf file.